# Original Attention Pooling

Each model below is a SigLIP-based Vision Transformer from the `timm` library that contains only the image encoder and uses original attention pooling, making it suitable for image feature extraction. All are tagged *Image Classification* and *Transformers*.

| Model | License | Library | Downloads | Likes |
|---|---|---|---|---|
| `vit_large_patch16_siglip_384.webli` | Apache-2.0 | timm | 64 | 0 |
| `vit_base_patch16_siglip_384.webli` | Apache-2.0 | timm | 64 | 1 |
| `vit_so400m_patch14_siglip_224.webli` | Apache-2.0 | timm | 123 | 1 |
| `vit_base_patch16_siglip_224.webli` | Apache-2.0 | timm | 330 | 1 |